Nakagami distribution#
The Nakagami distribution (nakagami) is a continuous distribution on \(x \ge 0\) used heavily in wireless communications to model the envelope (magnitude) of a fading signal. It is particularly convenient because its squared value is Gamma-distributed.
1) Title & classification#
| Item | Value |
|---|---|
| Name | Nakagami (`nakagami`) |
| Type | Continuous |
| Support | \(x \in [0,\infty)\) |
| Parameters | shape \(m \ge 1/2\), spread \(\Omega > 0\) |
| Common reparameterization | scale \(s=\sqrt{\Omega}\) (SciPy’s `scale`) |
What you’ll be able to do after this notebook#
recognize when a Nakagami model makes sense (fading envelopes, positive magnitudes)
write the PDF/CDF and compute key moments (including skewness/kurtosis)
interpret how \((m,\Omega)\) affects shape and scale
derive mean/variance and the log-likelihood (including MLE equations)
sample from a Nakagami distribution using NumPy only via a Gamma sampler
use `scipy.stats.nakagami` for `pdf`, `cdf`, `rvs`, and `fit`
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
import os
import plotly.io as pio
from scipy import stats
from scipy.optimize import root_scalar
from scipy.special import gammainc, gammaln, hyp1f1, psi
pio.templates.default = "plotly_white"
pio.renderers.default = os.environ.get("PLOTLY_RENDERER", "notebook")
np.set_printoptions(precision=4, suppress=True)
rng = np.random.default_rng(42)
2) Intuition & Motivation#
What it models#
Nakagami is most commonly used for nonnegative magnitudes (envelopes). In wireless fading, if a complex baseband channel coefficient is

\[
h = X e^{i\Phi}
\]

with random phase \(\Phi\), then the received amplitude \(|h| = X\) is modeled as \(X \sim \mathrm{Nakagami}(m,\Omega)\).
\(m\) (“fading parameter”) controls how severe the fading is.
\(\Omega\) is the average power (in fact, \(\mathbb{E}[X^2]=\Omega\)).
Typical use cases#
Wireless communications: Nakagami-\(m\) fading envelopes and link-level simulations (often used as a flexible alternative to Rayleigh/Rician).
Radar/sonar: scattered amplitude models.
Imaging / texture: a flexible positive amplitude model when Rayleigh is too restrictive.
Relations to other distributions#
A key identity is that the power \(Y=X^2\) is Gamma-distributed:

\[
Y = X^2 \sim \mathrm{Gamma}\!\left(\text{shape}=m,\ \text{scale}=\frac{\Omega}{m}\right).
\]
This link is the main computational advantage of the distribution.
Special cases / connections:
\(m=1/2\) gives the half-normal distribution (with scale \(\sigma=\sqrt{\Omega}\)).
\(m=1\) gives the Rayleigh distribution (with Rayleigh scale \(\sigma=\sqrt{\Omega/2}\)).
with \(m=k/2\) and \(\Omega=k\), you recover the chi distribution with \(k\) degrees of freedom.
as \(m\to\infty\), \(X\) concentrates near \(\sqrt{\Omega}\) (fading “disappears”).
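These special cases are easy to verify numerically. A quick sketch comparing densities on a grid, using `scipy.stats.halfnorm`, `rayleigh`, and `chi` as references:

```python
import numpy as np
from scipy import stats

x = np.linspace(0.01, 3.0, 200)
omega = 1.0

# m = 1/2: half-normal with sigma = sqrt(omega)
assert np.allclose(
    stats.nakagami(0.5, scale=np.sqrt(omega)).pdf(x),
    stats.halfnorm(scale=np.sqrt(omega)).pdf(x),
)

# m = 1: Rayleigh with sigma = sqrt(omega / 2)
assert np.allclose(
    stats.nakagami(1.0, scale=np.sqrt(omega)).pdf(x),
    stats.rayleigh(scale=np.sqrt(omega / 2.0)).pdf(x),
)

# m = k/2, omega = k: chi distribution with k degrees of freedom
k = 4
assert np.allclose(
    stats.nakagami(k / 2.0, scale=np.sqrt(k)).pdf(x),
    stats.chi(k).pdf(x),
)
print("all special cases match")
```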
3) Formal Definition#
We use parameters \((m,\Omega)\) with \(m \ge 1/2\) and \(\Omega>0\).
PDF#
For \(x \ge 0\),

\[
f(x;\, m, \Omega) = \frac{2\, m^m}{\Gamma(m)\, \Omega^m}\, x^{2m-1} \exp\!\left(-\frac{m}{\Omega} x^2\right).
\]
CDF#
Let \(\gamma(\cdot,\cdot)\) be the lower incomplete gamma function and \(P(a,z)=\gamma(a,z)/\Gamma(a)\) its regularized form. Then for \(x \ge 0\),

\[
F(x;\, m, \Omega) = P\!\left(m,\ \frac{m}{\Omega} x^2\right).
\]
SciPy’s gammainc(m, z) implements the regularized \(P(m,z)\).
def _validate_nakagami_params(m, omega):
m = float(m)
omega = float(omega)
if not (m >= 0.5):
raise ValueError("m must be >= 0.5")
if not (omega > 0):
raise ValueError("omega must be > 0")
return m, omega
def nakagami_logpdf(x, m, omega):
m, omega = _validate_nakagami_params(m, omega)
x = np.asarray(x, dtype=float)
out = np.full_like(x, -np.inf, dtype=float)
pos = x > 0
if np.any(pos):
out[pos] = (
np.log(2.0)
+ m * np.log(m)
- gammaln(m)
- m * np.log(omega)
+ (2.0 * m - 1.0) * np.log(x[pos])
- (m / omega) * (x[pos] ** 2)
)
# Handle x==0 (only finite at m=1/2)
zero = x == 0
if np.any(zero) and m == 0.5:
out[zero] = 0.5 * np.log(2.0) - 0.5 * np.log(np.pi) - 0.5 * np.log(omega)
return out
def nakagami_pdf(x, m, omega):
return np.exp(nakagami_logpdf(x, m, omega))
def nakagami_cdf(x, m, omega):
m, omega = _validate_nakagami_params(m, omega)
x = np.asarray(x, dtype=float)
out = np.zeros_like(x, dtype=float)
nonneg = x >= 0
if np.any(nonneg):
t = (m / omega) * (x[nonneg] ** 2)
out[nonneg] = gammainc(m, t)
return out
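As a standalone sanity check, the `gammainc`-based CDF formula agrees with SciPy's built-in implementation:

```python
import numpy as np
from scipy import stats
from scipy.special import gammainc

m, omega = 2.0, 1.5
x = np.linspace(0.0, 4.0, 9)

# CDF from the Gamma link: F(x) = P(m, m x^2 / omega)
cdf_formula = gammainc(m, (m / omega) * x**2)
cdf_scipy = stats.nakagami(m, scale=np.sqrt(omega)).cdf(x)
assert np.allclose(cdf_formula, cdf_scipy)
```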
4) Moments & Properties#
A very useful identity is \(Y=X^2 \sim \mathrm{Gamma}(m,\ \text{scale}=\Omega/m)\).
Raw moments#
For any \(k > -2m\) (so the integral converges),

\[
\mathbb{E}[X^k] = \left(\frac{\Omega}{m}\right)^{k/2} \frac{\Gamma(m + k/2)}{\Gamma(m)}.
\]

In particular,
\(\mathbb{E}[X] = \dfrac{\Gamma(m+1/2)}{\Gamma(m)}\sqrt{\dfrac{\Omega}{m}}\)
\(\mathbb{E}[X^2] = \Omega\)
\(\mathrm{Var}(X) = \Omega - \mathbb{E}[X]^2\)
The mode (for \(m>1/2\)) follows from maximizing the log-PDF:

\[
x_{\text{mode}} = \sqrt{\frac{(2m-1)\,\Omega}{2m}}.
\]
Skewness and kurtosis#
Using the raw moments above (up to order 4) you can compute central moments and then

\[
\text{skewness} = \frac{\mu_3}{\sigma^3}, \qquad \text{kurtosis} = \frac{\mu_4}{\sigma^4},
\]

where \(\mu_r\) are central moments and \(\sigma^2=\mu_2\).
MGF / characteristic function#
Because the tail is Gaussian-like (\(\exp(-c x^2)\)), the MGF \(M_X(t)=\mathbb{E}[e^{tX}]\) exists for all real \(t\). A closed form uses the confluent hypergeometric function \({}_1F_1\):

\[
M_X(t) = {}_1F_1\!\left(m;\ \tfrac{1}{2};\ \frac{\Omega t^2}{4m}\right) + t \sqrt{\frac{\Omega}{m}}\, \frac{\Gamma(m+\tfrac{1}{2})}{\Gamma(m)}\, {}_1F_1\!\left(m+\tfrac{1}{2};\ \tfrac{3}{2};\ \frac{\Omega t^2}{4m}\right).
\]
The characteristic function \(\varphi_X(t)=\mathbb{E}[e^{itX}]\) is the same expression with \(t^2\) replaced by \(-t^2\) and the second term multiplied by \(i\).
For the power \(Y=X^2\) (Gamma), the MGF is simple:

\[
M_Y(t) = \left(1 - \frac{\Omega}{m} t\right)^{-m}, \qquad t < \frac{m}{\Omega}.
\]
Entropy#
The differential entropy has a compact expression:

\[
h(X) = m - \ln 2 + \ln\Gamma(m) + \frac{1}{2}\ln\frac{\Omega}{m} - \left(m - \frac{1}{2}\right)\psi(m),
\]
where \(\psi\) is the digamma function.
def nakagami_raw_moment(k, m, omega):
m, omega = _validate_nakagami_params(m, omega)
k = float(k)
return (omega / m) ** (k / 2.0) * np.exp(gammaln(m + k / 2.0) - gammaln(m))
def nakagami_mean(m, omega):
return nakagami_raw_moment(1.0, m, omega)
def nakagami_var(m, omega):
mu = nakagami_mean(m, omega)
return omega - mu**2
def nakagami_skew_kurt(m, omega):
m1 = nakagami_raw_moment(1.0, m, omega)
m2 = nakagami_raw_moment(2.0, m, omega)
m3 = nakagami_raw_moment(3.0, m, omega)
m4 = nakagami_raw_moment(4.0, m, omega)
var = m2 - m1**2
mu3 = m3 - 3.0 * m2 * m1 + 2.0 * m1**3
mu4 = m4 - 4.0 * m3 * m1 + 6.0 * m2 * m1**2 - 3.0 * m1**4
skew = mu3 / (var ** 1.5)
kurt = mu4 / (var**2)
return skew, kurt
def nakagami_mgf(t, m, omega):
m, omega = _validate_nakagami_params(m, omega)
t = np.asarray(t, dtype=float)
z = omega * (t**2) / (4.0 * m)
gamma_ratio = np.exp(gammaln(m + 0.5) - gammaln(m))
term1 = hyp1f1(m, 0.5, z)
term2 = t * np.sqrt(omega / m) * gamma_ratio * hyp1f1(m + 0.5, 1.5, z)
return term1 + term2
def nakagami_cf(t, m, omega):
m, omega = _validate_nakagami_params(m, omega)
t = np.asarray(t, dtype=float)
z = -omega * (t**2) / (4.0 * m)
gamma_ratio = np.exp(gammaln(m + 0.5) - gammaln(m))
term1 = hyp1f1(m, 0.5, z)
term2 = 1j * t * np.sqrt(omega / m) * gamma_ratio * hyp1f1(m + 0.5, 1.5, z)
return term1 + term2
def nakagami_entropy(m, omega):
m, omega = _validate_nakagami_params(m, omega)
return (
m
- np.log(2.0)
+ gammaln(m)
+ 0.5 * np.log(omega / m)
- (m - 0.5) * psi(m)
)
m_ex, omega_ex = 2.0, 1.5
mu_ex = nakagami_mean(m_ex, omega_ex)
var_ex = nakagami_var(m_ex, omega_ex)
skew_ex, kurt_ex = nakagami_skew_kurt(m_ex, omega_ex)
entropy_ex = nakagami_entropy(m_ex, omega_ex)
mu_ex, var_ex, skew_ex, kurt_ex, (kurt_ex - 3.0), entropy_ex
(1.1512425464397995,
0.17464059926680608,
0.40569507726266335,
3.0592950893995936,
0.059295089399593603,
0.5288352805664636)
5) Parameter Interpretation#
The spread parameter \(\Omega\)#
\(\Omega\) is literally the second moment:

\[
\Omega = \mathbb{E}[X^2].
\]
Changing \(\Omega\) mostly rescales the distribution: if \(X\sim\mathrm{Nakagami}(m,\Omega)\) then \(X/\sqrt{\Omega}\sim\mathrm{Nakagami}(m,1)\).
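The rescaling claim can be checked directly through the CDFs: \(F_{m,\Omega}(x) = F_{m,1}(x/\sqrt{\Omega})\). A minimal standalone check:

```python
import numpy as np
from scipy import stats

m, omega = 2.0, 1.5
x = np.linspace(0.0, 3.0, 7)

# If X ~ Nakagami(m, omega), then X / sqrt(omega) ~ Nakagami(m, 1)
lhs = stats.nakagami(m, scale=np.sqrt(omega)).cdf(x)
rhs = stats.nakagami(m, scale=1.0).cdf(x / np.sqrt(omega))
assert np.allclose(lhs, rhs)
```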
The shape parameter \(m\)#
\(m\) controls how concentrated the distribution is away from 0:
\(m = 1/2\): half-normal (density is positive at 0).
\(m = 1\): Rayleigh (classical “fully developed” multipath fading envelope).
larger \(m\): less fading; the distribution becomes more peaked near \(\sqrt{\Omega}\).
as \(m\to\infty\): \(X\) concentrates near \(\sqrt{\Omega}\) (the variance shrinks like \(\Omega/(4m)\)).
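The large-\(m\) rate follows from the Stirling expansion \(\left(\Gamma(m+1/2)/\Gamma(m)\right)^2 = m - 1/4 + O(1/m)\), which gives \(\mathrm{Var}(X) \approx \Omega/(4m)\). A quick numerical check of this approximation:

```python
import numpy as np
from scipy.special import gammaln

def nakagami_var(m, omega):
    # Var(X) = omega - (omega / m) * (Gamma(m + 1/2) / Gamma(m))^2,
    # with the gamma ratio computed stably via log-gamma differences
    mean = np.exp(gammaln(m + 0.5) - gammaln(m)) * np.sqrt(omega / m)
    return omega - mean**2

omega = 1.0
ratios = {m: nakagami_var(m, omega) / (omega / (4.0 * m)) for m in [10.0, 100.0, 1000.0]}
print(ratios)  # ratios approach 1 as m grows
```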
# PDF: changing m (keep omega fixed)
omega = 1.0
ms = [0.5, 1.0, 2.0, 5.0]
x = np.linspace(0, 4.0, 600)
fig = go.Figure()
for m in ms:
fig.add_trace(
go.Scatter(
x=x,
y=nakagami_pdf(x, m, omega),
mode="lines",
name=f"m={m:g}",
)
)
fig.update_layout(
title="Nakagami PDF: effect of shape m (Ω=1)",
xaxis_title="x",
yaxis_title="density",
)
fig.show()
# PDF: changing omega (keep m fixed)
m = 2.0
omegas = [0.5, 1.0, 2.0]
x = np.linspace(0, 6.0, 700)
fig = go.Figure()
for omega in omegas:
fig.add_trace(
go.Scatter(
x=x,
y=nakagami_pdf(x, m, omega),
mode="lines",
name=f"Ω={omega:g}",
)
)
fig.update_layout(
title="Nakagami PDF: effect of Ω (m=2)",
xaxis_title="x",
yaxis_title="density",
)
fig.show()
6) Derivations#
Deriving the Gamma link (\(X^2\) is Gamma)#
Let \(X\) have the Nakagami PDF and define \(Y=X^2\). With \(x=\sqrt{y}\) we have

\[
\frac{dx}{dy} = \frac{1}{2\sqrt{y}}.
\]

So the density of \(Y\) is

\[
f_Y(y) = f_X(\sqrt{y})\,\frac{1}{2\sqrt{y}} = \frac{(m/\Omega)^m}{\Gamma(m)}\, y^{m-1} e^{-m y / \Omega},
\]

which is exactly \(\mathrm{Gamma}(\text{shape}=m,\ \text{scale}=\Omega/m)\).
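The derivation can be verified empirically: squaring Nakagami draws should produce samples indistinguishable from the claimed Gamma law. A quick Kolmogorov–Smirnov check:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
m, omega = 2.0, 1.5

# Square Nakagami draws and compare against Gamma(m, scale=omega/m)
x = stats.nakagami(m, scale=np.sqrt(omega)).rvs(size=5000, random_state=rng)
ks = stats.kstest(x**2, stats.gamma(a=m, scale=omega / m).cdf)
print(ks.statistic)  # small for a correct match
```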
Expectation (general moment)#
Using \(Y=X^2\),

\[
\mathbb{E}[X^k] = \mathbb{E}\!\left[Y^{k/2}\right] = \left(\frac{\Omega}{m}\right)^{k/2} \frac{\Gamma(m + k/2)}{\Gamma(m)}.
\]
Plugging \(k=1\) yields the mean, and \(k=2\) yields \(\mathbb{E}[X^2]=\Omega\).
Variance#

\[
\mathrm{Var}(X) = \mathbb{E}[X^2] - \mathbb{E}[X]^2 = \Omega - \frac{\Omega}{m}\left(\frac{\Gamma(m+\tfrac{1}{2})}{\Gamma(m)}\right)^{2}.
\]
Likelihood (and MLE equations)#
For data \(x_i\ge 0\), the log-likelihood is

\[
\ell(m,\Omega) = n\ln 2 + n m \ln\frac{m}{\Omega} - n\ln\Gamma(m) + (2m-1)\sum_i \ln x_i - \frac{m}{\Omega}\sum_i x_i^2.
\]

Fixing \(m\), maximizing over \(\Omega\) gives the closed-form estimator \(\widehat{\Omega} = \frac{1}{n}\sum_i x_i^2\).

Plugging \(\widehat{\Omega}\) in and differentiating w.r.t. \(m\) leads to the standard Gamma shape equation (with \(y_i=x_i^2\)):

\[
\ln m - \psi(m) = \ln\!\left(\frac{1}{n}\sum_i y_i\right) - \frac{1}{n}\sum_i \ln y_i.
\]
This must be solved numerically. Because the Nakagami model restricts \(m\ge 1/2\), the MLE may land on the boundary \(m=1/2\) if the unconstrained optimum is smaller.
def nakagami_loglik(x, m, omega):
x = np.asarray(x, dtype=float)
if np.any(x < 0):
return -np.inf
return float(np.sum(nakagami_logpdf(x, m, omega)))
def nakagami_mle(x):
"""Constrained MLE for (m, omega) with m>=0.5.
Uses the Gamma shape MLE equation on y=x^2.
"""
x = np.asarray(x, dtype=float)
if np.any(x < 0):
raise ValueError("data must be >= 0")
y = x**2
omega_hat = float(np.mean(y))
if not (omega_hat > 0):
return 0.5, omega_hat
# Avoid log(0) if data contain exact zeros (rare for a continuous model)
y_pos = y[y > 0]
if y_pos.size == 0:
return 0.5, omega_hat
s = np.log(omega_hat) - float(np.mean(np.log(y_pos)))
# If s ~ 0, the sample has extremely low dispersion; m -> infinity.
if s <= 1e-12:
return 1e6, omega_hat
def g(m):
return np.log(m) - psi(m) - s
lo = 0.5
if g(lo) <= 0:
return lo, omega_hat
hi = 1.0
while g(hi) > 0 and hi < 1e6:
hi *= 2.0
if g(hi) > 0:
return hi, omega_hat
sol = root_scalar(g, bracket=(lo, hi), method="brentq")
return float(sol.root), omega_hat
# Quick sanity check: recover parameters from synthetic data
m_true, omega_true = 2.0, 1.5
data = stats.nakagami(m_true, scale=np.sqrt(omega_true)).rvs(size=4000, random_state=rng)
m_hat, omega_hat = nakagami_mle(data)
m_hat, omega_hat
(2.055715894644666, 1.5069096363629064)
7) Sampling & Simulation (NumPy-only)#
The Gamma connection gives a simple sampler:
Sample \(Y\sim\mathrm{Gamma}(m,\ \text{scale}=\Omega/m)\).
Return \(X = \sqrt{Y}\).
So the main task is sampling a Gamma with shape \(m\).
Marsaglia–Tsang sketch (Gamma sampler)#
For shape \(\alpha\ge 1\) (here \(\alpha=m\)):
Set \(d = \alpha - 1/3\), \(c = 1/\sqrt{9d}\).
Repeat:
draw \(Z\sim\mathcal{N}(0,1)\) and set \(V=(1+cZ)^3\).
draw \(U\sim\mathrm{Uniform}(0,1)\).
accept \(dV\) if a (cheap) squeeze test passes; otherwise fall back to the exact log acceptance test.
For \(0<\alpha<1\) use the reduction:
If \(Y\sim\mathrm{Gamma}(\alpha+1,\theta)\) and \(U\sim\mathrm{Uniform}(0,1)\) independent, then \(X = Y\,U^{1/\alpha} \sim \mathrm{Gamma}(\alpha,\theta)\).
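The small-shape reduction is easy to sanity-check on its own. The sketch below isolates just that step (NumPy's built-in Gamma sampler stands in for the boosted-shape draw) and compares the resulting moments to the Gamma(\(\alpha,\theta\)) theory:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, theta = 0.4, 1.0
n = 200_000

# Boost the shape to alpha + 1 >= 1, then apply the U^(1/alpha) correction
y = rng.gamma(shape=alpha + 1.0, scale=theta, size=n)
u = rng.random(n)
x = y * u ** (1.0 / alpha)

# Gamma(alpha, theta) has mean alpha*theta and variance alpha*theta^2
print(x.mean(), x.var())
```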
Below is a reference implementation using NumPy only.
def _gamma_rvs_mt_shape_ge_1(alpha, n, rng):
"""Marsaglia–Tsang sampler for Gamma(alpha, 1) with alpha >= 1."""
d = alpha - 1.0 / 3.0
c = 1.0 / np.sqrt(9.0 * d)
out = np.empty(n, dtype=float)
filled = 0
while filled < n:
m = n - filled
z = rng.normal(size=m)
v = (1.0 + c * z) ** 3
valid = v > 0
if not np.any(valid):
continue
z = z[valid]
v = v[valid]
u = rng.random(size=v.size)
accept = (u < 1.0 - 0.0331 * (z**4)) | (
np.log(u) < 0.5 * z**2 + d * (1.0 - v + np.log(v))
)
accepted = d * v[accept]
k = accepted.size
out[filled : filled + k] = accepted
filled += k
return out
def gamma_rvs_numpy(alpha, theta=1.0, size=1, rng=None):
"""Sample from Gamma(alpha, theta) using NumPy only."""
if rng is None:
rng = np.random.default_rng()
alpha = float(alpha)
theta = float(theta)
if alpha <= 0 or theta <= 0:
raise ValueError("alpha and theta must be > 0")
size_tuple = (size,) if isinstance(size, int) else tuple(size)
n = int(np.prod(size_tuple))
if alpha >= 1:
x = _gamma_rvs_mt_shape_ge_1(alpha, n, rng)
else:
# Boost shape to alpha+1 >= 1, then apply the U^(1/alpha) correction
y = _gamma_rvs_mt_shape_ge_1(alpha + 1.0, n, rng)
u = rng.random(size=n)
x = y * (u ** (1.0 / alpha))
return (x * theta).reshape(size_tuple)
def nakagami_rvs_numpy(m, omega=1.0, size=1, rng=None):
"""Sample from Nakagami(m, omega) using NumPy only."""
m, omega = _validate_nakagami_params(m, omega)
if rng is None:
rng = np.random.default_rng()
y = gamma_rvs_numpy(m, theta=omega / m, size=size, rng=rng)
return np.sqrt(y)
# Smoke test: mean and second moment roughly match theory
m_test, omega_test = 2.0, 1.5
s = nakagami_rvs_numpy(m_test, omega_test, size=60_000, rng=rng)
s.mean(), nakagami_mean(m_test, omega_test), (s**2).mean(), omega_test
(1.1525436945219076, 1.1512425464397995, 1.5038743907493182, 1.5)
8) Visualization#
We’ll visualize:
the PDF and how it compares to a Monte Carlo histogram
the CDF and how it compares to an empirical CDF
We’ll use the NumPy-only sampler from above.
x = np.linspace(0, 4.0, 600)
omega = 1.0
ms = [0.5, 1.0, 2.0, 5.0]
fig_pdf = go.Figure()
for m in ms:
fig_pdf.add_trace(
go.Scatter(x=x, y=nakagami_pdf(x, m, omega), mode="lines", name=f"m={m:g}")
)
fig_pdf.update_layout(
title="Nakagami PDF (Ω=1)",
xaxis_title="x",
yaxis_title="density",
)
fig_pdf.show()
fig_cdf = go.Figure()
for m in ms:
fig_cdf.add_trace(
go.Scatter(x=x, y=nakagami_cdf(x, m, omega), mode="lines", name=f"m={m:g}")
)
fig_cdf.update_layout(
title="Nakagami CDF (Ω=1)",
xaxis_title="x",
yaxis_title="CDF",
)
fig_cdf.show()
def ecdf(samples):
x = np.sort(np.asarray(samples))
y = np.arange(1, x.size + 1) / x.size
return x, y
m_viz, omega_viz = 2.0, 1.5
n_viz = 80_000
samples = nakagami_rvs_numpy(m_viz, omega_viz, size=n_viz, rng=rng)
x_max = float(np.quantile(samples, 0.995))
x_grid = np.linspace(0.0, x_max, 600)
# PDF + histogram
fig = go.Figure()
fig.add_trace(
go.Histogram(
x=samples,
nbinsx=70,
histnorm="probability density",
name="Monte Carlo samples",
opacity=0.55,
)
)
fig.add_trace(
go.Scatter(
x=x_grid,
y=nakagami_pdf(x_grid, m_viz, omega_viz),
mode="lines",
name="Theoretical PDF",
line=dict(width=3),
)
)
fig.update_layout(
title=f"Nakagami(m={m_viz:g}, Ω={omega_viz:g}): histogram vs PDF",
xaxis_title="x",
yaxis_title="density",
bargap=0.02,
)
fig.show()
# CDF + empirical CDF
xs, ys = ecdf(samples)
fig = go.Figure()
fig.add_trace(
go.Scatter(
x=x_grid,
y=nakagami_cdf(x_grid, m_viz, omega_viz),
mode="lines",
name="Theoretical CDF",
line=dict(width=3),
)
)
fig.add_trace(
go.Scatter(
x=xs[::100],
y=ys[::100],
mode="markers",
name="Empirical CDF (subsampled)",
marker=dict(size=5),
)
)
fig.update_layout(
title=f"Nakagami(m={m_viz:g}, Ω={omega_viz:g}): empirical CDF vs CDF",
xaxis_title="x",
yaxis_title="CDF",
)
fig.show()
# Monte Carlo moment check
sample_mean = samples.mean()
sample_m2 = (samples**2).mean()
theory_mean = nakagami_mean(m_viz, omega_viz)
theory_m2 = omega_viz
sample_mean, theory_mean, sample_m2, theory_m2
(1.1524272744432116, 1.1512425464397995, 1.5024354377047782, 1.5)
9) SciPy Integration#
SciPy implements the Nakagami distribution as scipy.stats.nakagami.
Parameter mapping:
SciPy’s shape parameter is called `nu` and corresponds to our \(m\).
SciPy’s `scale` corresponds to \(s=\sqrt{\Omega}\), i.e. \(\Omega = \text{scale}^2\).
SciPy also includes a `loc` shift; for the standard Nakagami distribution you typically fix `loc=0`.
m_scipy, omega_scipy = 2.0, 1.5
dist = stats.nakagami(m_scipy, scale=np.sqrt(omega_scipy))
x = np.linspace(0, 4, 6)
np.c_[x, nakagami_pdf(x, m_scipy, omega_scipy), dist.pdf(x)]
array([[0. , 0. , 0. ],
[0.8 , 0.7755, 0.7755],
[1.6 , 0.4796, 0.4796],
[2.4 , 0.0227, 0.0227],
[3.2 , 0.0001, 0.0001],
[4. , 0. , 0. ]])
# rvs
data = dist.rvs(size=5000, random_state=rng)
# fit (fix loc=0 to match the standard support)
nu_hat, loc_hat, scale_hat = stats.nakagami.fit(data, floc=0)
m_hat_scipy = nu_hat
omega_hat_scipy = scale_hat**2
# Compare with the (m, omega) MLE derived from the Gamma link
m_hat_mle, omega_hat_mle = nakagami_mle(data)
(m_hat_scipy, omega_hat_scipy), (m_hat_mle, omega_hat_mle)
((1.988644559230805, 1.4934785377454523),
(1.988614048401301, 1.4934772333000144))
10) Statistical Use Cases#
Hypothesis testing#
A common question in fading models is whether the envelope is consistent with Rayleigh fading, i.e. \(m=1\).
\(H_0: m=1\) (Rayleigh)
\(H_1: m \ne 1\) (general Nakagami, \(m\ge 1/2\))
A likelihood-ratio test (LRT) uses

\[
\Lambda = 2\left[\ell(\widehat{m}, \widehat{\Omega}) - \ell(1, \widehat{\Omega}_0)\right],
\]

and (asymptotically) compares \(\Lambda\) to a \(\chi^2_1\) distribution.
Bayesian modeling#
Working with the power \(Y=X^2\) is often easiest. If you treat \(m\) as known and write the Gamma model in rate form

\[
Y_i \mid \lambda \sim \mathrm{Gamma}(m,\ \text{rate}=\lambda), \qquad \lambda = \frac{m}{\Omega},
\]

then a Gamma prior \(\lambda\sim\mathrm{Gamma}(a_0,b_0)\) is conjugate:

\[
\lambda \mid y_{1:n} \sim \mathrm{Gamma}\!\left(a_0 + n m,\ b_0 + \sum_i y_i\right).
\]
If \(m\) is also unknown, the model is no longer conjugate; typical approaches are numerical optimization, MCMC, or variational inference.
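One minimal non-conjugate approach when \(m\) is unknown: given \(m\), the rate \(\lambda\) can still be integrated out analytically (Gamma–Gamma conjugacy), leaving a one-dimensional marginal likelihood to evaluate on a grid of \(m\) values. The sketch below assumes a flat prior on \(m\) over the grid and the \(\mathrm{Gamma}(a_0,b_0)\) rate prior from above; the grid bounds and prior choices are illustrative, not canonical:

```python
import numpy as np
from scipy import stats
from scipy.special import gammaln

rng = np.random.default_rng(5)
m_true, omega_true = 2.0, 1.5
y = stats.nakagami(m_true, scale=np.sqrt(omega_true)).rvs(size=1000, random_state=rng) ** 2

# Marginal likelihood of y given m, with the rate lambda integrated out:
# log p(y | m) = (m-1) sum(log y) - n lnGamma(m) + lnGamma(a0 + n m)
#                - (a0 + n m) log(b0 + sum(y))     (+ terms constant in m)
a0, b0 = 2.0, 2.0
n, sum_y, sum_log_y = y.size, float(y.sum()), float(np.log(y).sum())
m_grid = np.linspace(0.5, 5.0, 400)
log_post = (
    (m_grid - 1.0) * sum_log_y
    - n * gammaln(m_grid)
    + gammaln(a0 + n * m_grid)
    - (a0 + n * m_grid) * np.log(b0 + sum_y)
)
post = np.exp(log_post - log_post.max())
post /= post.sum()
m_mean = float(np.sum(m_grid * post))
print(m_mean)  # close to m_true
```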
Generative modeling#
A simple complex fading coefficient can be generated as

\[
h = X e^{i\Phi}, \qquad X \sim \mathrm{Nakagami}(m,\Omega), \quad \Phi \sim \mathrm{Uniform}(0, 2\pi)\ \text{independent}.
\]
This produces a circularly symmetric random complex coefficient with controllable envelope statistics.
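A minimal NumPy-only sketch of this generator, checking the two properties we care about (near-zero mean and average power \(\mathbb{E}|h|^2=\Omega\)):

```python
import numpy as np

rng = np.random.default_rng(7)
m, omega = 2.0, 1.5
n = 100_000

# Envelope via the Gamma link, phase uniform on [0, 2*pi), independent
x = np.sqrt(rng.gamma(shape=m, scale=omega / m, size=n))
phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
h = x * np.exp(1j * phi)

# Circular symmetry: near-zero mean; average power E|h|^2 = omega
print(h.real.mean(), h.imag.mean(), (np.abs(h) ** 2).mean())
```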
# Example: LRT for H0: m=1 (Rayleigh) vs H1: m free
m_alt, omega_alt = 2.0, 1.0
x = nakagami_rvs_numpy(m_alt, omega_alt, size=3000, rng=rng)
# Null model: m fixed at 1, omega MLE is mean(x^2)
m0 = 1.0
omega0 = float(np.mean(x**2))
ll0 = nakagami_loglik(x, m0, omega0)
# Alternative: (m, omega) MLE
m1, omega1 = nakagami_mle(x)
ll1 = nakagami_loglik(x, m1, omega1)
lrt = 2.0 * (ll1 - ll0)
p_value = stats.chi2.sf(lrt, df=1)
(m1, omega1), lrt, p_value
((1.976324368518703, 1.0012587550407712),
674.9860946700712,
8.22779501891415e-149)
# Example: conjugate Bayesian update for omega when m is treated as known
y = x**2
m_known = 2.0
# Prior on lambda = m/omega (Gamma shape-rate)
a0, b0 = 2.0, 2.0
a_post = a0 + y.size * m_known
b_post = b0 + float(np.sum(y))
# Sample lambda ~ Gamma(shape=a_post, rate=b_post) and transform to omega = m/lambda
lambda_samps = rng.gamma(shape=a_post, scale=1.0 / b_post, size=50_000)
omega_samps = m_known / lambda_samps
np.quantile(omega_samps, [0.025, 0.5, 0.975])
array([0.9768, 1.0016, 1.0274])
11) Pitfalls#
Parameter constraints
In the Nakagami model, \(m\ge 1/2\) and \(\Omega>0\).
Many libraries (including SciPy) also allow a `loc` shift; if you want the standard support \([0,\infty)\), fix `loc=0`.
Parameterization mismatch
In this notebook, \(\Omega=\mathbb{E}[X^2]\).
In SciPy, `scale` \(= \sqrt{\Omega}\).
Numerical stability
Use `gammaln` (log-gamma) and work in log space when computing likelihoods. When \(m\) is large, ratios like \(\Gamma(m+1/2)/\Gamma(m)\) should be computed via log-gamma differences.
If your data contain exact zeros (common due to rounding), terms like \(\log(x_i^2)\) can break MLE equations; treat zeros carefully or add a small jitter.
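To see why the log-gamma route matters: for moderate \(m\), \(\Gamma(m)\) already overflows float64, so the naive ratio becomes \(\infty/\infty = \mathrm{NaN}\), while the log-space version stays accurate:

```python
import numpy as np
from scipy.special import gamma, gammaln

m = 500.0

# gamma overflows float64 around m ~ 172, so the direct ratio is inf/inf = nan
naive = gamma(m + 0.5) / gamma(m)

# The log-gamma difference is perfectly well behaved (~ sqrt(m) for large m)
stable = np.exp(gammaln(m + 0.5) - gammaln(m))

print(naive, stable)
```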
Model fit
Nakagami is a good model for envelopes, but not for signed quantities.
Check diagnostics (histograms, QQ plots, predictive checks) and consider mixture/alternate models if the tail or near-zero behavior is off.
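A QQ-style diagnostic is easy to compute numerically: compare empirical quantiles against quantiles of the fitted model. A minimal sketch on synthetic (well-specified) data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
m, omega = 2.0, 1.5
data = stats.nakagami(m, scale=np.sqrt(omega)).rvs(size=2000, random_state=rng)

# Fit with loc fixed at 0, then compare empirical vs fitted quantiles
nu_hat, _, scale_hat = stats.nakagami.fit(data, floc=0)
probs = np.linspace(0.05, 0.95, 19)
q_emp = np.quantile(data, probs)
q_fit = stats.nakagami(nu_hat, scale=scale_hat).ppf(probs)

# For a well-specified model the points hug the 45-degree line
max_rel_err = float(np.max(np.abs(q_emp / q_fit - 1.0)))
print(max_rel_err)
```

Large relative deviations, especially in the tails or near zero, are the signal to try a mixture or an alternative envelope model.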
12) Summary#
Nakagami is a continuous distribution on \([0,\infty)\) with parameters shape \(m\ge 1/2\) and spread \(\Omega>0\).
It is a standard model for fading envelopes in wireless channels; \(m\) controls fading severity and \(\Omega=\mathbb{E}[X^2]\) is average power.
Key identity: \(X^2\sim\mathrm{Gamma}(m,\ \text{scale}=\Omega/m)\), which yields moments, CDF, and efficient sampling.
SciPy provides `scipy.stats.nakagami` (with `scale` \(=\sqrt{\Omega}\)) for `pdf`, `cdf`, `rvs`, and `fit`.
References#
Nakagami, M. (1960). The m-distribution — A general formula of intensity distribution of rapid fading.
Simon, M. K. and Alouini, M.-S. Digital Communication over Fading Channels (Nakagami-\(m\) fading).
Marsaglia, G. and Tsang, W. W. (2000). A Simple Method for Generating Gamma Variables.
SciPy documentation:
scipy.stats.nakagami